Storageclusterpeer #2466
Conversation
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rewantsoni
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from ba6d227 to 059074b
Force-pushed from d8b406f to fc34c02
Last 3 commits are up for review.
Leave a note in the PR description saying which PR this is dependent on.
Reviewing in phases; will do a more thorough pass at the earliest / after the draft.
controllers/suite_test.go (Outdated)
I don't think this file is required.
PROJECT (Outdated)
I think it'd be better to add the operator-sdk command that you used for generating the files to this commit message.
// StorageClusterPeerReconciler reconciles a StorageClusterPeer object
// nolint:revive
pls do mention what aspect of the linter we are skipping on a new line.
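For illustration, the reason can be carried on the directive line itself; a minimal sketch, where the stated reason is a hypothetical placeholder rather than something from this PR:

package controllers

// StorageClusterPeerReconciler reconciles a StorageClusterPeer object
//
//nolint:revive // hypothetical placeholder reason describing the suppressed revive rule
type StorageClusterPeerReconciler struct{}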
}

//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclusterpeers,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclusterpeers/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=ocs.openshift.io,resources=storageclusterpeers/finalizers,verbs=update
//+kubebuilder:rbac:groups=ceph.rook.io,resources=cephblockpools,verbs=list;create;update
//+kubebuilder:rbac:groups=core,resources=secrets,verbs=get
I don't think get alone works; the underlying infra lists the existing objects and keeps updating the cache, so you need list perms as well.
During testing, it worked with get as far as I recall; I will check it once again and update it.
Tested the changes; only using get works.
ack, maybe only the client from the runtime manager requires both list & get. Pls recheck that there's no existing rbac for list of secrets and resolve the comment.
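For context on the get/list point: controller-runtime's default client serves Get from a shared informer cache, which has to list and watch the resource to stay populated, while mgr.GetAPIReader() reads straight from the API server. A minimal sketch of the direct-read path that works with only a get permission (the function name is ours, not from this PR):

package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
)

// readSecretDirect bypasses the informer cache, so `get` on secrets is enough;
// the manager's cached client would additionally need `list` and `watch`.
func readSecretDirect(ctx context.Context, mgr ctrl.Manager, name, namespace string) (*corev1.Secret, error) {
	secret := &corev1.Secret{}
	err := mgr.GetAPIReader().Get(ctx, types.NamespacedName{Name: name, Namespace: namespace}, secret)
	return secret, err
}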
// Apply status changes to the StorageClassRequest
statusError := r.Client.Status().Update(r.ctx, r.storageClusterPeer)
if statusError != nil {
	r.Log.Info("Failed to update StorageClassRequest status.")
storageClassRequest, a typo?
Yes, I will fix it
}

func (r *StorageClusterPeerReconciler) reconcileCephBlockPool() error {

	for _, cephBlockPool := range r.cephBlockPoolList.Items {
so mirroring on block pools is all or none?
Yes, we decided that this will be the initial design; if required, we might add annotations to whitelist/blacklist the blockpools.
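Purely illustrative of the annotation idea mentioned above; the annotation key and opt-out semantics are hypothetical and not part of this PR. A fragment based on the loop quoted earlier:

// Hypothetical per-pool opt-out; the annotation key is invented for illustration.
const mirroringExcludedAnnotation = "ocs.openshift.io/mirroring-excluded"

for _, cephBlockPool := range r.cephBlockPoolList.Items {
	if _, excluded := cephBlockPool.GetAnnotations()[mirroringExcludedAnnotation]; excluded {
		continue // skip pools that opted out of peering
	}
	// enable mirroring for the remaining pools, as in reconcileCephBlockPool
}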
}

func (r *StorageClusterPeerReconciler) newExternalClient() (*providerClient.OCSProviderClient, error) {
	ocsClient, err := providerClient.NewProviderClient(r.ctx, r.storageClusterPeer.Spec.OCSAPIServerURI, time.Second*10)
why do we need to construct a grpc client on every reconcile?
We need the OCSAPIServerURI, which will be different for different clusters.
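So the construction is per-CR by design, since the URI comes from each StorageClusterPeer's spec. A sketch of the call site with cleanup; a Close() method on the provider client is an assumption here:

ocsClient, err := r.newExternalClient()
if err != nil {
	return reconcile.Result{}, err
}
// Assumed: the provider client exposes Close() to release the gRPC connection.
defer ocsClient.Close()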
Force-pushed from d5cac87 to 0847bb6
Testing:
After applying StorageClusterPeer:
After deleting the StorageClusterPeer:
/test ocs-operator-bundle-e2e-aws
labels:
  name: storageclusterpeer-sample
spec:
  ocsApiServerUri: 10.0.0.0/31659
intentionally mentioned / in place of : ?
No, updated it. Thanks for pointing it out.
}

// SetupWithManager sets up the controller with the Manager.
func (r *StorageClusterPeerReconciler) SetupWithManager(mgr ctrl.Manager) error {

	enqueueStorageClusterPeerRequest := handler.EnqueueRequestsFromMapFunc(
		func(context context.Context, obj client.Object) []reconcile.Request {
can use ctx in place of context
func(context context.Context, obj client.Object) []reconcile.Request {
	// Get the StorageClusterPeer objects
	scpList := &ocsv1.StorageClusterPeerList{}
	err := r.Client.List(context, scpList, &client.ListOptions{Namespace: obj.GetNamespace()})
so you'll have multiple scp in the same namespace? More like: if you have two blockpools, and the first has two peers while the second has only one peer, should you enqueue all the peer CRs (all 3), or can you enqueue only the peers corresponding to the block pool?
Let me know if my understanding is incorrect.
A single StorageClusterPeer will peer all the block pools created; if we have two StorageClusterPeers, it will peer all the block pools present to both peer clusters.
//remove finalizer
r.Log.Info("removing finalizer from the StorageClusterPeer resource")
r.storageClusterPeer.SetFinalizers(slices.Filter([]string{}, r.storageClusterPeer.GetFinalizers(), func(s string) bool {
where is this SetFinalizers defined? why can't we use controllerutil.SetFinalizer and conditionally update the resource?
SetFinalizers is a pre-defined function; it is defined as:
func (meta *ObjectMeta) SetFinalizers(finalizers []string) { meta.Finalizers = finalizers }
Updated to use the helper function.
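A minimal sketch of the controllerutil pattern being suggested, assuming a plain client.Client (the helper name ensureFinalizer is ours). controllerutil.AddFinalizer reports whether it actually mutated the object, so the Update call only happens when something changed:

package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureFinalizer adds the finalizer and updates the resource only when needed,
// avoiding an API call on every reconcile.
func ensureFinalizer(ctx context.Context, c client.Client, obj client.Object, finalizer string) (ctrl.Result, error) {
	if controllerutil.AddFinalizer(obj, finalizer) {
		if err := c.Update(ctx, obj); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}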
slices.Contains(r.storageClusterPeer.GetFinalizers(), ocsv1.StorageClusterPeerFinalizer)
if !slices.Contains(r.storageClusterPeer.GetFinalizers(), ocsv1.StorageClusterPeerFinalizer) {
redundant op?
Yes, updated it
// add finalizer
slices.Contains(r.storageClusterPeer.GetFinalizers(), ocsv1.StorageClusterPeerFinalizer)
if !slices.Contains(r.storageClusterPeer.GetFinalizers(), ocsv1.StorageClusterPeerFinalizer) {
	r.Log.V(-1).Info("finalizer missing on the StorageClusterPeer resource, adding...")
	r.storageClusterPeer.SetFinalizers(append(r.storageClusterPeer.GetFinalizers(), ocsv1.StorageClusterPeerFinalizer))
	if err := r.update(r.storageClusterPeer); err != nil {
		return reconcile.Result{}, fmt.Errorf("failed to update StorageClusterPeer with finalizer: %v", err)
	}
}
pls use helper from controllerutil.
_, err := ctrl.CreateOrUpdate(r.ctx, r.Client, &cephBlockPool, func() error {
	cephBlockPool.Spec.Mirroring.Enabled = true
	cephBlockPool.Spec.Mirroring.Mode = "image"
	return nil
})
is there any reason for enabling the mirroring daemon via gRPC but not locally?
The mirror daemon is required on the secondary cluster, not on the primary cluster.
r.storageClusterPeer.Status.Phase = ocsv1.StorageClusterPeerConfiguring

if err := r.reconcileCephBlockPool(); err != nil {
maybe reconcileOwnBlockPool sounds better, as the other one is being called reconcilePeerBlockPool 🤔?
how about reconcileLocalBlockPool?
	return err
}

_, err := ocsClient.PeerBlockPool(r.ctx, generateSecretName(cephBlockPool.Name, clusterID), secret.Data["pool"], secret.Data["token"])
in existing RDR, how is the secret name generated?
@vbnrh @umangachapagain could you help me in answering this question?
func (r *StorageClusterPeerReconciler) reconcilePeerBlockPool(ocsClient *providerClient.OCSProviderClient) error {

	clusterID := util.GetClusterID(r.ctx, r.Client, &r.Log)
if clusterID is required for each reconcile, why not store it in the struct itself, as it's the same throughout the lifetime of the cluster?
On another note, I did come across this util func; it seems a bit misleading. It'd have been better if it returned an err on failure; not sure why the func's dependents are ok with an empty val.
Yes, even I am not sure why we are okay with returning an empty val.
@iamniting @malayparida2000 could you elaborate on why we are returning an empty val?
This func is called at one place earlier, while constructing a configmap. I am not able to recall why we were returning an empty value. As you are already checking if the value is empty and returning from the reconcile, that should be good.
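A sketch of the error-returning shape being suggested; the wrapper name is ours, util.GetClusterID itself is unchanged, and imports (util, logr, fmt) are as used elsewhere in this PR:

// getClusterIDOrError wraps the util func so callers handle failure explicitly
// instead of checking for an empty string.
func getClusterIDOrError(ctx context.Context, c client.Client, log *logr.Logger) (string, error) {
	clusterID := util.GetClusterID(ctx, c, log)
	if clusterID == "" {
		return "", fmt.Errorf("could not determine the cluster ID")
	}
	return clusterID, nil
}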
Force-pushed from 5f71e7e to c08bb37
Force-pushed from c08bb37 to c078f1c
Force-pushed from c078f1c to 4302e62
Force-pushed from 4302e62 to 247aeb1
Force-pushed from 247aeb1 to ab281a3
//remove finalizer
r.Log.Info("removing finalizer from the StorageClusterPeer resource")
controllerutil.RemoveFinalizer(r.storageClusterPeer, ocsv1.StorageClusterPeerFinalizer)
Incorrect, I guess: you are removing the finalizer from the in-memory object but not on the API server; you need to update here.
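A sketch of the fix being asked for, reusing the reconciler's fields from the quoted code: controllerutil.RemoveFinalizer reports whether it mutated the in-memory object, and the change still has to be pushed to the API server:

// RemoveFinalizer only edits the local copy; persist it with an Update call.
if controllerutil.RemoveFinalizer(r.storageClusterPeer, ocsv1.StorageClusterPeerFinalizer) {
	if err := r.Client.Update(r.ctx, r.storageClusterPeer); err != nil {
		return reconcile.Result{}, fmt.Errorf("failed to remove finalizer from StorageClusterPeer: %v", err)
	}
}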
Force-pushed from 0393c64 to 283faaa
ran the command: operator-sdk create api --group ocs --version v1 --kind StorageClusterPeer --resource --controller

Signed-off-by: Rewant Soni <[email protected]>
Force-pushed from 283faaa to e9b03f3
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Closing this in favour of #2677
Add a new CR and controller to peer two block pools with the same name.

Ran the command:
operator-sdk create api --group ocs --version v1 --kind StorageClusterPeer --resource --controller
to create the CR and controller.

Depends on (in order):